
    Neural Decision Boundaries for Maximal Information Transmission

    We consider here how to separate multidimensional signals into two categories such that the binary decision transmits the maximum possible information about those signals. Our motivation comes from the nervous system, where neurons process multidimensional signals into a binary sequence of responses (spikes). In a small-noise limit, we derive a general equation for the decision boundary that locally relates its curvature to the probability distribution of inputs. We show that for Gaussian inputs the optimal boundaries are planar, whereas for non-Gaussian inputs the curvature is nonzero. As an example, we consider exponentially distributed inputs, which are known to approximate a variety of signals from natural environments. (Comment: 5 pages, 3 figures)
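
    As a rough illustration of the setting, the sketch below (hypothetical code, not taken from the paper; all names and parameter values are invented) estimates the information a noisy planar boundary transmits about Gaussian inputs as its threshold is scanned. The estimate peaks at the median split, consistent with planar boundaries being optimal for Gaussian inputs.

        import numpy as np

        rng = np.random.default_rng(0)

        def binary_info_bits(proj, theta, noise_sd=0.3, n_bins=30):
            # I(y; x) = H(y) - H(y|x) for the noisy decision y = 1[proj + noise > theta],
            # with the input discretized into quantile bins of its projection
            y = (proj + rng.normal(0.0, noise_sd, proj.size) > theta).astype(int)
            edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
            idx = np.clip(np.digitize(proj, edges[1:-1]), 0, n_bins - 1)
            def h(p):
                return 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
            H_y = h(y.mean())
            H_y_x = sum((idx == b).mean() * h(y[idx == b].mean())
                        for b in range(n_bins) if (idx == b).any())
            return H_y - H_y_x

        x = rng.normal(size=(100_000, 2))            # two-dimensional Gaussian inputs
        w = np.array([1.0, 0.5]); w /= np.linalg.norm(w)
        proj = x @ w                                 # planar boundary = threshold on a projection
        for theta in (0.0, 0.5, 1.0):
            print(f"theta={theta}: ~{binary_info_bits(proj, theta):.3f} bits")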

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. When the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. (Comment: 24 pages, 4 figures, 1 supporting information)
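
    As a toy version of the sampling procedure described above (a sketch, not the paper's conductance-based models; the filter shape, gain curve, and parameter values are invented), the following simulates a fixed linear/nonlinear neuron at two input variances and recovers the empirical filter by reverse correlation:

        import numpy as np

        rng = np.random.default_rng(1)
        L = 20
        k_true = np.exp(-np.arange(L) / 5.0)         # fixed underlying filter
        k_true /= np.linalg.norm(k_true)

        def simulate(sigma, n=200_000):
            # LN model: white noise -> linear filter -> sigmoidal gain -> Bernoulli spikes
            stim = rng.normal(0.0, sigma, n)
            drive = np.convolve(stim, k_true)[:n]
            p_spike = 1.0 / (1.0 + np.exp(-4.0 * (drive - 1.0)))
            return stim, rng.random(n) < p_spike

        for sigma in (0.5, 2.0):
            stim, spikes = simulate(sigma)
            t = np.flatnonzero(spikes); t = t[t >= L]
            sta = np.stack([stim[i - L + 1:i + 1] for i in t]).mean(axis=0)
            print(f"sigma={sigma}: rate={spikes.mean():.3f}, "
                  f"filter match={np.dot(sta / np.linalg.norm(sta), k_true[::-1]):.2f}")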

    Adaptive Filtering Enhances Information Transmission in Visual Cortex

    Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation. (Comment: 20 pages, 11 figures, includes supplementary information)
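
    One standard way to quantify "information carried by the neural response about the filtered stimulus" is the single-spike information computed from histograms of the filter output, overall versus spike-conditioned. A minimal sketch (the generic histogram estimator, not necessarily the paper's exact procedure; the surrogate data at the end are invented):

        import numpy as np

        def info_per_spike_bits(proj, spikes, n_bins=40):
            # Single-spike information (bits) about the filtered stimulus:
            # I = sum_s P(s|spike) * log2(P(s|spike) / P(s)), s = binned filter output
            edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
            p_all = np.histogram(proj, bins=edges)[0].astype(float)
            p_spk = np.histogram(proj[spikes], bins=edges)[0].astype(float)
            p_all /= p_all.sum()
            p_spk /= p_spk.sum()
            ok = (p_spk > 0) & (p_all > 0)
            return float(np.sum(p_spk[ok] * np.log2(p_spk[ok] / p_all[ok])))

        # Example with surrogate data: spikes triggered by a sigmoid of the projection
        rng = np.random.default_rng(0)
        proj = rng.normal(size=100_000)
        spikes = rng.random(proj.size) < 1.0 / (1.0 + np.exp(-3.0 * (proj - 1.0)))
        print(f"{info_per_spike_bits(proj, spikes):.2f} bits/spike")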

    Network adaptation improves temporal representation of naturalistic stimuli in Drosophila eye: II Mechanisms

    Retinal networks must adapt constantly to best present the ever-changing visual world to the brain. Here we test the hypothesis that adaptation is the result of different mechanisms at several synaptic connections within the network. In a companion paper (Part I), we showed that adaptation in the photoreceptors (R1-R6) and large monopolar cells (LMCs) of the Drosophila eye improves sensitivity to under-represented signals in seconds by enhancing both the amplitude and frequency distribution of LMCs' voltage responses to repeated naturalistic contrast series. In this paper, we show that such adaptation requires both the light-mediated conductance and the feedback-mediated synaptic conductance. A faulty feedforward pathway in histamine-receptor mutant flies speeds up the LMC output, mimicking extreme light adaptation. A faulty feedback pathway from L2 LMCs to photoreceptors slows down the LMC output, mimicking dark adaptation. These results underline the importance of network adaptation for efficient coding, and as a mechanism for selectively regulating the size and speed of signals in neurons. We suggest that the concerted action of many different mechanisms and neural connections is responsible for adaptation to visual stimuli. Further, our results demonstrate the need for detailed circuit reconstructions, like that of the Drosophila lamina, to understand how networks process information.

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to the shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and, in particular, outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population. (Comment: 11 pages, 7 figures)
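
    A minimal sketch of the model class (toy sizes and random rather than fitted parameters): conditioned on a stimulus s, the codeword distribution takes the Gibbs form P(r|s) proportional to exp(h(s).r + r^T J r), with stimulus-dependent fields h_i(s) given by per-cell linear filters. For a small population it can be enumerated exactly.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(2)
        N, D = 5, 8                                   # 5 cells, length-8 stimulus (toy sizes)
        J = np.triu(rng.normal(0.0, 0.2, (N, N)), 1)  # pairwise couplings J_ij
        K = rng.normal(0.0, 1.0, (N, D))              # per-cell filters -> h_i(s) = K_i . s

        def codeword_distribution(s):
            # P(r|s) proportional to exp(h(s).r + r^T J r), over all 2^N binary words
            h = K @ s
            words = np.array(list(product([0, 1], repeat=N)))
            logp = words @ h + np.einsum('wi,ij,wj->w', words, J, words)
            p = np.exp(logp - logp.max())
            return words, p / p.sum()

        words, p = codeword_distribution(rng.normal(size=D))
        print("most likely codeword:", words[p.argmax()], f"p={p.max():.2f}")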

    Factors Affecting Frequency Discrimination of Vibrotactile Stimuli: Implications for Cortical Encoding

    BACKGROUND: Measuring perceptual judgments about stimuli while manipulating their physical characteristics can uncover the neural algorithms underlying sensory processing. We carried out psychophysical experiments to examine how humans discriminate vibrotactile stimuli. METHODOLOGY/PRINCIPAL FINDINGS: Subjects compared the frequencies of two sinusoidal vibrations applied sequentially to one fingertip. Performance was reduced when (1) the root mean square velocity (or energy) of the vibrations was equated by adjusting their amplitudes, and (2) the vibrations were noisy (their temporal structure was irregular). These effects were super-additive when subjects compared noisy vibrations of equal velocity, indicating that frequency judgments became more dependent on the vibrations' temporal structure when differential information about velocity was eliminated. To investigate which areas of the somatosensory system use information about velocity and temporal structure, we required subjects to compare vibrations applied sequentially to opposite hands. This paradigm exploits the fact that tactile input to neurons at early levels (e.g., the primary somatosensory cortex, SI) is largely confined to the contralateral side of the body, so these neurons are less able to contribute to vibration comparisons between hands. The subjects' performance was still sensitive to differences in vibration velocity, but became less sensitive to noise. CONCLUSIONS/SIGNIFICANCE: We conclude that vibration frequency is represented in different ways by different mechanisms distributed across multiple cortical regions. Which mechanisms support the “readout” of frequency varies according to the information present in the vibration. Overall, the present findings are consistent with a model in which information about vibration velocity is coded in regions beyond SI. While adaptive processes within SI also contribute to the representation of frequency, this adaptation is influenced by the temporal regularity of the vibration.
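
    The velocity-equating manipulation is simple arithmetic: for x(t) = A sin(2*pi*f*t), the velocity amplitude is 2*pi*f*A, so the RMS velocity is 2*pi*f*A/sqrt(2), and holding it fixed across frequencies means scaling the amplitude as 1/f. A minimal sketch (the function name and the frequency/velocity values are illustrative, not from the paper):

        import numpy as np

        def amplitude_for_rms_velocity(f_hz, v_rms):
            # x(t) = A sin(2*pi*f*t)  =>  velocity amplitude 2*pi*f*A, RMS = 2*pi*f*A/sqrt(2)
            return v_rms * np.sqrt(2.0) / (2.0 * np.pi * f_hz)

        for f in (20.0, 30.0):                        # an illustrative comparison pair
            print(f"{f:.0f} Hz -> amplitude {amplitude_for_rms_velocity(f, v_rms=10.0):.3f}")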

    Learning Priors for Bayesian Computations in the Nervous System

    Our nervous system continuously combines new information from our senses with information it has acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about how the nervous system acquires, or learns, its priors. Here we present results from experiments in which the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure each subject's evolving prior during learning. We confirm that, through extensive practice, subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior, while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior.
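
    A minimal sketch of the kind of ideal-observer account this suggests (conjugate normal-inverse-gamma updating, chosen here for convenience rather than taken from the paper; all values invented): the estimate of the prior's mean settles within a few trials, while the variance estimate takes far longer.

        import numpy as np

        rng = np.random.default_rng(3)
        targets = rng.normal(2.0, 1.5, 500)           # draws from the switched prior

        # Conjugate normal-inverse-gamma posterior over the prior's (mean, variance)
        mu, kappa, alpha, beta = 0.0, 1.0, 1.0, 1.0
        for t, x in enumerate(targets, 1):
            beta += 0.5 * kappa * (x - mu) ** 2 / (kappa + 1)   # uses the pre-update mean
            mu = (kappa * mu + x) / (kappa + 1)
            kappa += 1
            alpha += 0.5
            if t in (5, 50, 500):
                print(f"trial {t}: mean est {mu:.2f}, variance est {beta / (alpha - 1):.2f}")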

    The Natural Variation of a Neural Code

    The way information is represented by sequences of action potentials of spiking neurons is determined by the input each neuron receives, but also by its biophysics and the specifics of the circuit in which it is embedded. Even the “code” of identified neurons can vary considerably from individual to individual. Here we compared the neural codes of the identified H1 neuron in the visual systems of two families of flies, blow flies and flesh flies, and explored the effect of the sensory environment the flies were exposed to during development on the H1 code. We found that the two families differed considerably in the temporal structure of the code, its content and energetic efficiency, as well as in the temporal delay of the neural response. Differences in environmental conditions during the flies' development had no significant effect. Our results may thus reflect an instance of a family-specific design of the neural code. They may also suggest that individual variability in information processing by this specific neuron, in terms of both form and content, is regulated genetically.

    Learning with a network of competing synapses

    Competition between synapses arises in some forms of correlation-based plasticity. Here we propose a game-theory-inspired model of synaptic interactions whose dynamics are driven by competition between synapses in their weak and strong states, which are characterized by different timescales. Learning of inputs and memory are meaningfully definable in an effective description of networked synaptic populations. We study, numerically and analytically, the dynamic responses of the effective system to various signal types, particularly with reference to an existing empirical model of motor adaptation. The dependence of the system-level behavior on the synaptic parameters and the signal strength is brought out clearly, illuminating issues such as optimal performance and the functional role of multiple timescales. (Comment: 16 pages, 9 figures; published in PLoS ONE)
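
    The paper's game-theoretic dynamics are not reproduced here, but the flavor of weak/strong synaptic states with different timescales can be caricatured in a few lines (a toy sketch; the transition rates p_up and p_down and the signal protocol are invented):

        import numpy as np

        rng = np.random.default_rng(4)
        n_syn = 1000
        strong = np.zeros(n_syn, dtype=bool)          # weak (False) vs strong (True) state
        p_up, p_down = 0.2, 0.02                      # fast entry, slow exit: two timescales

        signal = np.r_[np.ones(200), -np.ones(200)]   # step and reversal, as in adaptation tasks
        w_trace = []
        for s in signal:
            if s > 0:
                strong |= rng.random(n_syn) < p_up        # drive weak synapses to strong
            else:
                strong &= rng.random(n_syn) >= p_down     # reversal slowly drains the strong pool
            w_trace.append(strong.mean())                 # effective population-averaged weight
        print(f"after drive: {w_trace[199]:.2f}, after reversal: {w_trace[-1]:.2f}")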

    Towards a General Theory of Neural Computation Based on Prediction by Single Neurons

    Although there has been tremendous progress in understanding the mechanics of the nervous system, there is still no general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information (“prediction error” or “surprise”). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are most closely correlated with reward but least predictable, since unpredictable inputs provide the neuron with the most “new” information about future reward. To minimize the error in its predictions and to respond only when excitation is “new and surprising,” the neuron selects among its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
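
    The proposed learning step can be caricatured in a few lines: a unit whose "voltage" is current excitation minus a weighted prediction from its prior inputs, with an anti-Hebbian update that learns to cancel the predictable component (a toy sketch of the prediction-error idea, not the full theory; all parameters invented):

        import numpy as np

        rng = np.random.default_rng(5)
        T, n_prior = 20_000, 10
        prior = rng.normal(size=(T, n_prior))                 # prior inputs (context, past)
        k = rng.normal(size=n_prior)
        drive = prior @ k + 0.3 * rng.normal(size=T)          # mostly predictable excitation

        w = np.zeros(n_prior)                                 # weights on prior inputs
        eta = 1e-3
        v2 = np.empty(T)
        for t in range(T):
            v = drive[t] - prior[t] @ w       # voltage = current excitation - prediction
            w += eta * v * prior[t]           # anti-Hebbian on the subtractive pathway
            v2[t] = v * v
        print(f"error power: first 1k = {v2[:1000].mean():.2f}, last 1k = {v2[-1000:].mean():.2f}")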